
Semi-supervised Dictionary Learning Based on Hilbert-Schmidt Independence Criterion



Abstract

In this paper, a novel semi-supervised dictionary learning and sparse representation (SS-DLSR) method is proposed. The proposed method benefits from the supervisory information by learning the dictionary in a space where the dependency between the data and the class labels is maximized. This maximization is performed using the Hilbert-Schmidt independence criterion (HSIC). On the other hand, the global distribution of the underlying manifolds is learned from the unlabeled data by minimizing the distances between the unlabeled data and the corresponding nearest labeled data in the space of the learned dictionary. The proposed SS-DLSR algorithm has closed-form solutions for both the dictionary and the sparse coefficients, and therefore does not have to learn the two iteratively and alternately, as is common in the DLSR literature. This makes the solution for the proposed algorithm very fast. The experiments confirm the improvement in classification performance on benchmark datasets by including the information from both labeled and unlabeled data, particularly when there are many unlabeled data.
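The paper's formulas are not reproduced on this page. As a minimal, illustrative sketch of the dependence measure the abstract relies on, the standard biased empirical HSIC estimate between a data kernel K and a label kernel L can be computed as below; the function name `empirical_hsic` and the toy data are assumptions for illustration only, not taken from the paper.

```python
import numpy as np

def empirical_hsic(K, L):
    """Biased empirical HSIC estimate between two kernel matrices.

    K : (n, n) kernel matrix on the data samples
    L : (n, n) kernel matrix on the class labels
    Returns tr(K H L H) / (n - 1)^2, where H is the centering matrix.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix H = I - (1/n) 11^T
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy example (illustrative only): linear kernels on random labeled data
X = np.random.randn(20, 5)            # 20 samples, 5 features
y = np.random.randint(0, 2, size=20)  # binary class labels
Y = np.eye(2)[y]                      # one-hot label matrix
K = X @ X.T                           # linear kernel on the data
L = Y @ Y.T                           # linear kernel on the labels
print(empirical_hsic(K, L))
```

In the setting described by the abstract, a quantity of this form (measured between the labeled data in the learned dictionary space and their class labels) is what the dictionary is chosen to maximize, so that the learned representation stays dependent on the label information.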
